Estimating Training Data Influence by Tracing Gradient Descent

Neural Information Processing Systems

We introduce a method called TracIn that computes the influence of a training example on a prediction made by the model. The idea is to trace how the loss on the test point changes during the training process whenever the training example of interest was utilized. We provide a scalable implementation of TracIn via: (a) a first-order gradient approximation to the exact computation, (b) saved checkpoints of standard training procedures, and (c) cherry-picking layers of a deep neural network. In contrast with previously proposed methods, TracIn is simple to implement; all it needs is the ability to work with gradients, checkpoints, and loss functions. It applies to any machine learning model trained using stochastic gradient descent or a variant of it, agnostic of architecture, domain, and task.
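The first-order, checkpoint-based approximation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `tracin_cp` and its argument names are hypothetical, and per-checkpoint gradients are assumed to already be flattened into vectors.

```python
import numpy as np

def tracin_cp(train_grads, test_grads, learning_rates):
    """First-order TracIn sketch: sum, over the saved checkpoints, of the
    learning rate times the dot product between the loss gradient of the
    training example and the loss gradient of the test example.

    train_grads, test_grads: lists of 1-D gradient vectors, one per checkpoint.
    learning_rates: the learning rate in effect at each checkpoint.
    """
    influence = 0.0
    for g_train, g_test, lr in zip(train_grads, test_grads, learning_rates):
        influence += lr * float(np.dot(g_train, g_test))
    return influence
```

Intuitively, a positive score suggests that gradient steps on the training example tended to reduce the loss on the test point, while a negative score suggests the opposite.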


Review for NeurIPS paper: Estimating Training Data Influence by Tracing Gradient Descent

Neural Information Processing Systems

Weaknesses: I have some major concerns with the evaluation part of the paper. A simple baseline could be a loss-based selection method: simply select training points based on loss change. A recent paper [DataLens IJCNN 20] shows that simple loss-based selection outperforms both influence functions and representer selection on mislabeled-data identification when the fraction of mislabeled data is small. As the fraction of mislabeled data increases, influence functions work better than the loss-based method.
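The loss-based baseline the review describes can be sketched minimally. The function name `select_by_loss` is hypothetical, and ranking by current loss (rather than by loss change over training) is just one simple variant of the idea:

```python
import numpy as np

def select_by_loss(losses, k):
    """Return the indices of the k training points with the highest loss;
    under label noise, high-loss points are candidate mislabeled examples."""
    losses = np.asarray(losses)
    return np.argsort(losses)[::-1][:k].tolist()
```

For example, with per-example losses `[0.1, 2.0, 0.5]` and `k=2`, the points at indices 1 and 2 would be flagged for inspection first.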


Review for NeurIPS paper: Estimating Training Data Influence by Tracing Gradient Descent

Neural Information Processing Systems

I recommend accepting this paper. The authors addressed the concerns raised by the reviewers, and all reviewers agree that the paper should be accepted.

